Conversation

@wiredfool
Member

@wiredfool wiredfool commented Jan 20, 2017

Fixes #2152.

This is a big, in-progress refactor and addition.

  • I've added 4 new test items to the matrix: Alpine, Precise (12.04), Xenial (16.04), and Trusty x86 (14.04). They're all testing Python 2.7 for now. I particularly want the x86 part of the matrix, because we've tripped over that recently (and there's a PR requiring x86 testing: #2359, "Fix for issue #1902 Applied patch: matplotlib/matplotlib/commit/a9155…"). Precise and Xenial wind up testing different library versions, which is also useful and necessary for the tiff metadata PR #2288, "Libtiff metadata writing fixes".
  • I've added a repo of docker files, so they can be pulled from the dockerhub. I'm not convinced that it's tons faster to do that rather than just install from bare images.
  • I've converted a few random test docstrings so that the -v listing shows all the test names that have been run.
  • I've reorganized the .travis.yml file, refactoring some of it into scripts so that it's easier to gate on docker/non-docker. This unfortunately removes the travis_retry verb, since it's implemented in Travis, not shell.
  • I'm going to at least rebase this PR's many commits prior to merge.

And the big one -- Tests are failing:

  • There are actual failing tests on the trusty one.
  • Precise is hanging, I think on Jpeg2k. It doesn't do it on my machine, where I'm building the Docker image. (Go figure.)
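The docker/non-docker gating described above might look like the following. This is a minimal sketch; the DOCKER variable name and the commented-out commands are assumptions for illustration, not the PR's actual scripts.

```shell
#!/usr/bin/env bash
# Sketch: one Travis script that branches on whether the job runs in a
# Docker image or natively on the VM. DOCKER names the image, if any.

build_step() {
    if [ -n "${DOCKER:-}" ]; then
        # Docker job: the prebuilt image already has the dependencies.
        echo "docker build: ${DOCKER}"
        # docker run --rm -v "$PWD:/Pillow" "pythonpillow/${DOCKER}"
    else
        # Native job: build directly on the Travis VM.
        echo "native build"
        # python setup.py build_ext --inplace
    fi
}
```

With this shape, the Travis matrix only has to set (or omit) one environment variable per job.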

@wiredfool
Member Author

So, this is working now. I'd feel better if the tk tests were failing, though. Still, it looks like this would have caught the 3.4.0 issue.

Member

@hugovk hugovk left a comment


👍


  • I've added a repo of docker files, so they can be pulled from the dockerhub. I'm not convinced that it's tons faster to do that rather than just install from bare images.

Would it be worth testing and comparing both approaches?


  • I've converted a few random test docstrings so that the -v listing shows all the test names that have been run.

Perhaps skip this commit now that PR #2369 is merged? Docstrings are good practice in general, though.


  • I've reorganized the .travis.yml file, refactoring some of it into scripts so that it's easier to gate on docker/non-docker. This unfortunately removes the travis_retry verb, since it's implemented in Travis, not shell.

Another downside is that you get just one timing from Travis for install.sh etc., instead of timings for each separate command, making it harder to optimise slow commands (for example, some pip installs have been slow for certain Pythons, and it's been possible to identify and improve some make -jX commands).

For example, compare https://travis-ci.org/python-pillow/Pillow/jobs/193782095#L249 and https://travis-ci.org/python-pillow/Pillow/jobs/193957929#L560.

Having said that, the .travis.yml file was already quite big. But if it's possible to keep some stuff in .travis.yml, that could be useful for timings in the future, especially across different platforms and Pythons.

pyflakes *.py | tee >(wc -l)
pyflakes PIL/*.py | tee >(wc -l)
pyflakes Tests/*.py | tee >(wc -l)
fi
Member


These can give different results for Python 2 and Python 3, and usually take less than 10 seconds. Perhaps keep them for all builds? Or was it particularly slow on some of the new Docker images?
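For readers unfamiliar with the `tee >(wc -l)` idiom in the snippet above: tee passes the pyflakes warnings through to the build log unchanged, while a bash process substitution counts them. A minimal stand-in demo (printf replaces pyflakes here, and the temp-file plumbing is only for illustration):

```shell
#!/usr/bin/env bash
# Demo of `cmd | tee >(wc -l)`: the output flows to stdout unchanged
# while a process substitution counts the lines on the side.
count_lines() {
    local tmp
    tmp=$(mktemp)
    # printf stands in for `pyflakes PIL/*.py` producing two warnings
    printf 'W1: unused import\nW2: undefined name\n' | tee >(wc -l > "$tmp")
    sleep 0.2                     # let the substituted process flush
    echo "warnings: $(tr -d ' ' < "$tmp")"
    rm -f "$tmp"
}
```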

Member Author


So, I could have sworn that this was in the .travis.yml file that I was pulling from, but I can't find it now, and I've rebased enough that it's lost.

Member Author


OK, I've just taken these off the Docker runs. (Note that TRAVIS_PYTHON_VERSION in the Docker runs is the Python version in Travis, not in the Docker image, so it's pretty much irrelevant at this point and could fundamentally mismatch.)

@wiredfool
Member Author

wiredfool commented Jan 21, 2017

Would it be worth testing and comparing both approaches?

That was written before I pulled the compiled dependencies into the images; that saves 15-30 seconds. It's disappointing that a local cached run takes 10-15 seconds while the runs on Travis are measured in minutes, but there's downloading and post-script setup stuff on Travis. Apart from that, I think having the testing images on Docker Hub is useful for others to pull and test against when debugging failures, e.g. cd ubuntu-trusty-x86 && make shell puts you in the Docker image with all the dependencies. Edit: it also makes it a lot faster to iterate when you're not pushing through Travis.
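A `make shell` target along those lines might reduce to something like the following. This is a sketch; the image name, mount path, and the echo dry-run are illustrative assumptions, not copied from the repo.

```shell
#!/usr/bin/env bash
# Sketch of what `make shell` could expand to: start the prebuilt test
# image interactively with the checkout mounted. The echo makes this a
# dry-run; drop it to actually launch the container.
docker_shell() {
    local image="${1:-pythonpillow/ubuntu-trusty-x86}"
    echo docker run --rm -it -v "$PWD:/Pillow" "$image" /bin/bash
}
```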

it's been possible to identify and improve some make -jX commands

Yeah, I noticed that. make install shouldn't really be threaded, though it probably doesn't matter that much, as it doesn't start a ton of subtasks.

The after_success.sh script can easily be put back in, except that we're not running coverage on the Docker builds, so it's not actually doing anything there. install.sh and script.sh are either/or, and would be a big mess if we tested on Docker for each item.

@wiredfool
Member Author

wiredfool commented Jan 21, 2017

Also worth noting: I'm planning on adding a virtualenv parameter to the docker args so that we can test against alternate Pythons on the same image (e.g. system py2 and py3).
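Selecting the interpreter inside the image could be as simple as passing its name through. A minimal sketch; the function and argument names are hypothetical, not from this PR.

```shell
#!/usr/bin/env bash
# Sketch: run a command under whichever interpreter the caller names,
# defaulting to the system python. A virtualenv path's bin/python
# could be passed the same way.
run_with_python() {
    local py="${1:-python}"
    "$py" -c 'import sys; print(".".join(map(str, sys.version_info[:2])))'
}
```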

And, unrelated: we could probably add a make environment variable in Travis to do the -j thing, so it wouldn't need to be hard-coded for anyone who's got a single-proc VM.
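That environment variable could be picked up like this. A sketch under assumptions: the MAKE_JOBS name is invented here, and nproc is used as a fallback where available.

```shell
#!/usr/bin/env bash
# Sketch: honour a MAKE_JOBS environment variable, falling back to the
# CPU count (or 1 if nproc is unavailable). A single-proc VM just
# exports MAKE_JOBS=1; CI sets it per job instead of hard-coding -jN.
make_jobs() {
    echo "${MAKE_JOBS:-$(nproc 2>/dev/null || echo 1)}"
}
# usage: make -j"$(make_jobs)"
```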
